We propose a novel deep learning model named ACLNet for cloud segmentation from ground-based images. ACLNet uses both a deep neural network and a machine learning (ML) algorithm to extract complementary features. Specifically, it uses EfficientNet-B0 as the backbone and atrous spatial pyramid pooling (ASPP) to learn at multiple receptive fields and extract fine details from the image. ACLNet also uses k-means clustering to extract cloud boundaries more precisely. ACLNet is effective for both daytime and nighttime images. It provides a lower error rate, higher recall, and a higher F1-score than state-of-the-art cloud segmentation models. The source code of ACLNet is available here: https://github.com/ckmvigil/aclnet.
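For concreteness, here is a minimal sketch of such a pipeline, assuming PyTorch, torchvision, and scikit-learn; the module wiring, atrous rates, and the k-means refinement step are illustrative assumptions, not the released ACLNet code:

```python
# Illustrative ACLNet-style pipeline: EfficientNet-B0 backbone, ASPP for
# multi-receptive-field context, and k-means to sharpen cloud boundaries.
# Module wiring, atrous rates, and the refinement step are assumptions.
import torch.nn.functional as F
from torch import nn
from torchvision.models import efficientnet_b0
from torchvision.models.segmentation.deeplabv3 import ASPP
from sklearn.cluster import KMeans

class CloudSegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = efficientnet_b0(weights="DEFAULT").features  # 1280-channel features
        self.aspp = ASPP(in_channels=1280, atrous_rates=[6, 12, 18])  # multi-scale context
        self.head = nn.Conv2d(256, 1, kernel_size=1)                  # binary cloud-mask logits

    def forward(self, x):
        h, w = x.shape[-2:]
        logits = self.head(self.aspp(self.backbone(x)))
        return F.interpolate(logits, size=(h, w), mode="bilinear", align_corners=False)

def refine_with_kmeans(prob_map, k=2):
    """Cluster per-pixel cloud probabilities (a NumPy array) to crispen the boundary."""
    flat = prob_map.reshape(-1, 1)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(flat)
    # The cluster with the higher mean probability is taken as "cloud".
    cloud = max(range(k), key=lambda c: flat[labels == c].mean())
    return (labels == cloud).reshape(prob_map.shape)
```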
As the integration density and design complexity of semiconductor wafers increase, the magnitude and complexity of their defects are also rising. Since manual inspection of wafer defects is costly, automated artificial intelligence (AI) based computer-vision approaches are highly desired. Previous works on defect analysis have several limitations, such as low accuracy and the need for separate models for classification and segmentation. To analyze mixed-type defects, some previous works require training one model per defect type, which is not scalable. In this paper, we present WaferSegClassNet (WSCN), a novel network based on an encoder-decoder architecture. WSCN performs simultaneous classification and segmentation of both single and mixed-type wafer defects. WSCN uses a "shared encoder" for classification and segmentation, which allows training WSCN end-to-end. We use an N-pair contrastive loss to first pretrain the encoder, and then use a BCE-Dice loss for segmentation and a categorical cross-entropy loss for classification. Using the N-pair contrastive loss helps achieve a better embedding of wafer maps in the latent space. WSCN has a model size of only 0.51 MB and performs only 0.2M FLOPs; thus, it is much lighter than other state-of-the-art models. It also requires only 150 epochs to converge, whereas previous works need 4,000 epochs. We evaluate our model on the MixedWM38 dataset, which has 38,015 images. WSCN achieves an average classification accuracy of 98.2% and a Dice coefficient of 0.9999. We are the first to show segmentation results on the MixedWM38 dataset. The source code is available at https://github.com/ckmvigil/wafersegclassnet.
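A minimal sketch of the described training objective, assuming PyTorch; the weighting and smoothing constants are illustrative, not the paper's exact values:

```python
# Sketch of the described objective: BCE-Dice loss on the segmentation
# head plus categorical cross-entropy on the classification head of the
# shared encoder. Weights and smoothing are illustrative assumptions.
import torch
import torch.nn.functional as F

def dice_loss(seg_logits, seg_target, smooth=1.0):
    pred = torch.sigmoid(seg_logits).flatten(1)
    target = seg_target.flatten(1)
    inter = (pred * target).sum(dim=1)
    dice = (2 * inter + smooth) / (pred.sum(dim=1) + target.sum(dim=1) + smooth)
    return 1 - dice.mean()

def wscn_loss(seg_logits, seg_target, cls_logits, cls_target, w_seg=1.0, w_cls=1.0):
    seg = F.binary_cross_entropy_with_logits(seg_logits, seg_target) \
          + dice_loss(seg_logits, seg_target)
    cls = F.cross_entropy(cls_logits, cls_target)  # after N-pair contrastive pretraining
    return w_seg * seg + w_cls * cls
```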
Graph convolutional networks (GCNs) are widely used in many applications, yet they still require large amounts of labeled training data. Moreover, the adjacency matrix of a GCN is static, so data-processing strategies cannot efficiently adjust the quantity of training data derived from the built-in graph structure. To further improve the performance and self-learning ability of GCNs, in this paper we propose RRLFSOR, an efficient self-supervised learning strategy for GCNs based on randomly removing links in one region. RRLFSOR can be regarded as a new data augmenter that alleviates over-smoothing. RRLFSOR is examined on two efficient and representative GCN models with three public citation datasets: Cora, PubMed, and CiteSeer. Experiments on transductive link-prediction tasks show that our strategy consistently outperforms the baseline models in accuracy on the three benchmark datasets.
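As a rough illustration of link removal as graph augmentation, here is a sketch that drops a uniform random fraction of undirected edges, assuming NumPy and SciPy; RRLFSOR's fixed-step, one-region selection rule is more structured than this uniform variant:

```python
# Uniform link-removal augmentation for a GCN adjacency matrix: drop a
# fraction of undirected edges so each epoch trains on a perturbed graph.
import numpy as np
import scipy.sparse as sp

def remove_links(adj: sp.coo_matrix, drop_rate: float, seed: int = 0) -> sp.coo_matrix:
    rng = np.random.default_rng(seed)
    row, col = adj.row, adj.col
    upper = row < col                                  # visit each undirected edge once
    keep = rng.random(int(upper.sum())) >= drop_rate   # Bernoulli keep mask
    r, c = row[upper][keep], col[upper][keep]
    # Rebuild a symmetric adjacency matrix from the surviving edges.
    data = np.ones(2 * len(r))
    return sp.coo_matrix(
        (data, (np.concatenate([r, c]), np.concatenate([c, r]))), shape=adj.shape
    )
```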
Reinforcement learning can enable robots to navigate to distant goals while optimizing user-specified reward functions, including preferences for following lanes, staying on paved paths, or avoiding freshly mowed grass. However, online learning from trial-and-error for real-world robots is logistically challenging, and methods that instead can utilize existing datasets of robotic navigation data could be significantly more scalable and enable broader generalization. In this paper, we present ReViND, the first offline RL system for robotic navigation that can leverage previously collected data to optimize user-specified reward functions in the real world. We evaluate our system for off-road navigation without any additional data collection or fine-tuning, and show that it can navigate to distant goals using only offline training from this dataset, and exhibit behaviors that qualitatively differ based on the user-specified reward function.
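A toy sketch of the reward-relabeling step such a system implies, scoring each logged transition with the user's reward function before offline RL training; the transition fields and example reward are hypothetical, not the ReViND pipeline:

```python
# Toy reward relabeling for offline RL with user-specified rewards.
# The field names ("observation", "action", "grass_fraction") are
# hypothetical stand-ins for whatever the logged dataset provides.
def relabel(transitions, reward_fn):
    return [dict(t, reward=reward_fn(t["observation"], t["action"]))
            for t in transitions]

# e.g., penalize terrain the user dislikes:
# relabeled = relabel(dataset, lambda obs, act: -obs["grass_fraction"])
```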
We propose an approach for semantic imitation, which uses demonstrations from a source domain, e.g. human videos, to accelerate reinforcement learning (RL) in a different target domain, e.g. a robotic manipulator in a simulated kitchen. Instead of imitating low-level actions like joint velocities, our approach imitates the sequence of demonstrated semantic skills like "opening the microwave" or "turning on the stove". This allows us to transfer demonstrations across environments (e.g. real-world to simulated kitchen) and agent embodiments (e.g. bimanual human demonstration to robotic arm). We evaluate on three challenging cross-domain learning problems and match the performance of demonstration-accelerated RL approaches that require in-domain demonstrations. In a simulated kitchen environment, our approach learns long-horizon robot manipulation tasks, using less than 3 minutes of human video demonstrations from a real-world kitchen. This enables scaling robot learning via the reuse of demonstrations, e.g. collected as human videos, for learning in any number of target domains.
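A toy illustration of imitating skill order rather than low-level actions: the agent earns a bonus whenever its current skill matches the next skill in the cross-domain demonstration. The skill labels and scoring are hypothetical, not the paper's learned skill representations:

```python
# Score progress along a demonstrated semantic-skill sequence.
def skill_sequence_bonus(demo_skills, executed_skills):
    i, bonus = 0, 0.0
    for skill in executed_skills:
        if i < len(demo_skills) and skill == demo_skills[i]:
            bonus += 1.0  # progressed one step along the demonstrated sequence
            i += 1
    return bonus

demo = ["open microwave", "turn on stove"]
print(skill_sequence_bonus(demo, ["open microwave", "grasp kettle", "turn on stove"]))  # 2.0
```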
Navigation is one of the most heavily studied problems in robotics, and is conventionally approached as a geometric mapping and planning problem. However, real-world navigation presents a complex set of physical challenges that defies simple geometric abstractions. Machine learning offers a promising way to go beyond geometry and conventional planning, allowing for navigational systems that make decisions based on actual prior experience. Such systems can reason about traversability in ways that go beyond geometry, accounting for the physical outcomes of their actions and exploiting patterns in real-world environments. They can also improve as more data is collected, potentially providing a powerful network effect. In this article, we present a general toolkit for experiential learning of robotic navigation skills that unifies several recent approaches, describe the underlying design principles, summarize experimental results from several of our recent papers, and discuss open problems and directions for future work.
Multi-Scale and U-shaped Networks are widely used in various image restoration problems, including deblurring. Keeping in mind the wide range of applications, we present a comparison of these architectures and their effects on image deblurring. We also introduce a new block called NFResblock. It consists of a Fast Fourier Transform layer and a series of modified Non-Linear Activation Free Blocks. Based on these architectures and additions, we introduce NFResnet and NFResnet+, which are modified multi-scale and U-Net architectures, respectively. We also use three different loss functions to train these architectures: Charbonnier Loss, Edge Loss, and Frequency Reconstruction Loss. Extensive experiments on the Deep Video Deblurring dataset, along with ablation studies for each component, are presented in this paper. The proposed architectures achieve a considerable increase in Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) values.
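For reference, a sketch of the three named losses in PyTorch; the epsilon, Laplacian kernel, and loss weights are common choices rather than the paper's exact settings:

```python
# Charbonnier, Edge (Charbonnier on Laplacian edge maps), and Frequency
# Reconstruction (L1 in the Fourier domain) losses. Constants are
# illustrative assumptions.
import torch
import torch.nn.functional as F

def charbonnier(x, y, eps=1e-3):
    return torch.sqrt((x - y) ** 2 + eps ** 2).mean()

def edge_loss(x, y):
    # Charbonnier distance between Laplacian edge maps of the two images.
    k = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]],
                     device=x.device, dtype=x.dtype)
    k = k.view(1, 1, 3, 3).repeat(x.size(1), 1, 1, 1)
    lap = lambda t: F.conv2d(t, k, padding=1, groups=t.size(1))
    return charbonnier(lap(x), lap(y))

def frequency_loss(x, y):
    # L1 distance between spectra (real 2-D FFT over the spatial dims).
    return (torch.fft.rfft2(x) - torch.fft.rfft2(y)).abs().mean()

def total_loss(pred, target, w_edge=0.05, w_freq=0.01):
    return (charbonnier(pred, target)
            + w_edge * edge_loss(pred, target)
            + w_freq * frequency_loss(pred, target))
```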
Iterative text revision improves text quality by fixing grammatical errors, rephrasing for better readability or contextual appropriateness, or reorganizing sentence structures throughout a document. Most recent research has focused on understanding and classifying different types of edits in the iterative revision process from human-written text instead of building accurate and robust systems for iterative text revision. In this work, we aim to build an end-to-end text revision system that can iteratively generate helpful edits by explicitly detecting editable spans (where-to-edit) with their corresponding edit intents and then instructing a revision model to revise the detected edit spans. Leveraging datasets from other related text editing NLP tasks, combined with the specification of editable spans, leads our system to more accurately model the process of iterative text refinement, as evidenced by empirical results and human evaluations. Our system significantly outperforms previous baselines on our text revision tasks and other standard text revision tasks, including grammatical error correction, text simplification, sentence fusion, and style transfer. Through extensive qualitative and quantitative analysis, we establish vital connections between edit intentions and writing quality, and enable better computational modeling of iterative text revisions.
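A high-level sketch of the detect-then-revise loop, where `detect_edits` and `revise_span` are hypothetical stand-ins for the trained span-detection and span-revision models:

```python
# Iteratively detect editable spans with intents, then revise each span.
def iterative_revision(text, detect_edits, revise_span, max_rounds=3):
    for _ in range(max_rounds):
        edits = detect_edits(text)  # [(start, end, intent), ...] over `text`
        if not edits:
            break  # converged: no editable spans detected
        # Apply edits right-to-left so earlier character offsets stay valid.
        for start, end, intent in sorted(edits, key=lambda e: -e[0]):
            text = text[:start] + revise_span(text[start:end], intent) + text[end:]
    return text
```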
Semantic navigation is necessary to deploy mobile robots in uncontrolled environments like our homes, schools, and hospitals. Many learning-based approaches have been proposed in response to the lack of semantic understanding of the classical pipeline for spatial navigation, which builds a geometric map using depth sensors and plans to reach point goals. Broadly, end-to-end learning approaches reactively map sensor inputs to actions with deep neural networks, while modular learning approaches enrich the classical pipeline with learning-based semantic sensing and exploration. But learned visual navigation policies have predominantly been evaluated in simulation. How well do different classes of methods work on a robot? We present a large-scale empirical study of semantic visual navigation methods comparing representative methods from classical, modular, and end-to-end learning approaches across six homes with no prior experience, maps, or instrumentation. We find that modular learning works well in the real world, attaining a 90% success rate. In contrast, end-to-end learning does not, dropping from 77% simulation to 23% real-world success rate due to a large image domain gap between simulation and reality. For practitioners, we show that modular learning is a reliable approach to navigate to objects: modularity and abstraction in policy design enable Sim-to-Real transfer. For researchers, we identify two key issues that prevent today's simulators from being reliable evaluation benchmarks - (A) a large Sim-to-Real gap in images and (B) a disconnect between simulation and real-world error modes - and propose concrete steps forward.
We consider the problem of embodied visual navigation given an image-goal (ImageNav) where an agent is initialized in an unfamiliar environment and tasked with navigating to a location 'described' by an image. Unlike related navigation tasks, ImageNav does not have a standardized task definition which makes comparison across methods difficult. Further, existing formulations have two problematic properties; (1) image-goals are sampled from random locations which can lead to ambiguity (e.g., looking at walls), and (2) image-goals match the camera specification and embodiment of the agent; this rigidity is limiting when considering user-driven downstream applications. We present the Instance-specific ImageNav task (InstanceImageNav) to address these limitations. Specifically, the goal image is 'focused' on some particular object instance in the scene and is taken with camera parameters independent of the agent. We instantiate InstanceImageNav in the Habitat Simulator using scenes from the Habitat-Matterport3D dataset (HM3D) and release a standardized benchmark to measure community progress.